A Novel Mixed Precision Distributed TPU GAN for Accelerated Learning Curve
Abstract
Deep neural networks are gaining importance and popularity in application services. Because of the enormous number of learnable parameters and large datasets, training them is computationally costly, and parallel, distributed computation strategies are used to accelerate the process. Generative Adversarial Networks (GANs) are a recent achievement in deep learning. These generative models are expensive to train because a GAN consists of two networks that both train on the dataset, typically on a single server. Conventional learning-accelerator designs are also challenged by the unique properties of GANs, such as computation stages with non-traditional convolution layers. This work addresses the issue by distributing GANs so that they can train on datasets spread over many TPUs (Tensor Processing Units); distributed training accelerates the process and decreases training time. In this paper, the network is accelerated with a multi-core TPU under a data-parallel synchronous model. For adequate acceleration of the network, a data-parallel SGD (Stochastic Gradient Descent) model is implemented in TensorFlow using mixed precision, bfloat16, and XLA (Accelerated Linear Algebra). The study was conducted on the MNIST dataset for batch sizes varying from 64 to 512 over 30 epochs on a TPU v3 with a 128 × 128 systolic array. The bfloat16 technique decreases storage cost and speeds up floating-point computations. Learning curves for the generator and discriminator networks were obtained, and training time was reduced by 79% with increasing batch size on the TPU.
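The storage claim for bfloat16 can be illustrated with a short, self-contained sketch (the helper name and the truncation rounding are illustrative choices, not from the paper): a bfloat16 value is a float32 with the mantissa cut from 23 to 7 bits, so it keeps float32's full 8-bit exponent range in half the storage.

```python
import struct

def to_bfloat16(x: float) -> float:
    """Truncate a float32 bit pattern to bfloat16 (top 16 bits:
    1 sign + 8 exponent + 7 mantissa bits), then widen back."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

# bfloat16 keeps float32's exponent range but only ~3 decimal digits.
print(to_bfloat16(3.14159))   # 3.140625
print(to_bfloat16(1e38))      # still finite: same exponent range as float32
```

Hardware typically rounds to nearest rather than truncating as here, but the format itself is exactly this top-16-bit slice, which is why float32-to-bfloat16 conversion is so cheap on TPUs.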
Similar Resources
GPU-Accelerated Asynchronous Error Correction for Mixed Precision Iterative Refinement
In hardware-aware high performance computing, blockasynchronous iteration and mixed precision iterative refinement are two techniques that are applied to leverage the computing power of SIMD accelerators like GPUs. Although they use a very different approach for this purpose, they share the basic idea of compensating the convergence behaviour of an inferior numerical algorithm by a more efficie...
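As an illustrative sketch of the mixed precision iterative refinement idea summarized above (all names and the 3-significant-digit "low precision" are assumptions, not code from the cited work): a cheap low-precision solver produces the solution and its corrections, while residuals computed in full double precision steer the iteration to high accuracy.

```python
def lowprec(v: float) -> float:
    """Simulate low-precision arithmetic by keeping 3 significant digits."""
    return float(f"{v:.3g}")

def solve2x2(A, b, low=False):
    """Direct Cramer's-rule solve; optionally rounded to low precision."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
         (b[1] * A[0][0] - b[0] * A[1][0]) / det]
    return [lowprec(v) for v in x] if low else x

def refine(A, b, steps=3):
    """Mixed precision iterative refinement: every solve is done by the
    cheap low-precision solver; full-precision residuals drive corrections."""
    x = solve2x2(A, b, low=True)              # initial low-precision solve
    for _ in range(steps):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        d = solve2x2(A, r, low=True)          # correction, still low precision
        x = [x[i] + d[i] for i in range(2)]   # accumulate in full precision
    return x

x = refine([[3.0, 1.0], [2.0, 3.0]], [1.0, 1.0])
# converges toward the exact solution [2/7, 1/7]
```

Because the residual shrinks with the error, each pass recovers roughly as many extra digits as the low-precision solver carries, which is the effect GPU implementations exploit.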
A Novel Reconfiguration Mixed with Distributed Generation Planning via Considering Voltage Stability Margin
In recent years, in Iran and other countries the power systems are going to move toward creating a competition structure for selling and buying electrical energy. These changes and the numerous advantages of DGs have made more incentives to use these kinds of generators than before. Therefore, it is necessary to study all aspects of DGs, such as size selection and optimal placement and impact o...
SPFP: Speed without compromise - A mixed precision model for GPU accelerated molecular dynamics simulations
A new precision model is proposed for the acceleration of all-atom classical molecular dynamics (MD) simulations on graphics processing units (GPUs). This precision model replaces double precision arithmetic with fixed point integer arithmetic for the accumulation of force components as compared to a previously introduced model that uses mixed single/double precision arithmetic. This significan...
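The fixed-point accumulation idea can be sketched as follows (the scale factor is illustrative, not the constant used by SPFP): scaling each force component to an integer before summation makes the reduction exact and associative, so the total no longer depends on summation order.

```python
SCALE = 1 << 24  # illustrative fixed-point scale: 24 fractional bits

def fixed_point_sum(values):
    """Accumulate contributions in integer arithmetic. Integer addition
    is exact and associative, so the result is order-independent."""
    acc = sum(int(round(v * SCALE)) for v in values)
    return acc / SCALE

forces = [0.1, -0.2, 0.30000001, 1e-7]
a = fixed_point_sum(forces)
b = fixed_point_sum(reversed(forces))
# a == b exactly; naive float summation can differ between orders
```

Order independence is what makes such accumulations deterministic across GPU thread schedules, at the cost of a fixed dynamic range set by the scale factor.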
Mixed Low-precision Deep Learning Inference using Dynamic Fixed Point
We propose a cluster-based quantization method to convert pre-trained full precision weights into ternary weights with minimal impact on the accuracy. In addition we also constrain the activations to 8-bits thus enabling sub 8-bit full integer inference pipeline. Our method uses smaller clusters of N filters with a common scaling factor to minimize the quantization loss, while also maximizing t...
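A minimal sketch of ternarization with a shared scaling factor (the 0.7 x mean-magnitude threshold is a common ternary-weight heuristic used here as an assumption; the paper's cluster grouping of N filters is not reproduced):

```python
def ternarize(weights):
    """Map full-precision weights to {-1, 0, +1} times one shared scale:
    zero out small weights, keep only the signs of large ones."""
    delta = 0.7 * sum(abs(w) for w in weights) / len(weights)  # threshold
    ternary = [0 if abs(w) < delta else (1 if w > 0 else -1) for w in weights]
    kept = [abs(w) for w, t in zip(weights, ternary) if t != 0]
    alpha = sum(kept) / len(kept) if kept else 0.0  # shared scaling factor
    return alpha, ternary

alpha, t = ternarize([0.9, -0.05, 0.4, -0.8, 0.02])
# reconstructed weights are alpha * t, e.g. [0.7, 0, 0.7, -0.7, 0]
```

A single scale per group keeps the multiply out of the inner loop: inference reduces to integer additions and subtractions followed by one multiplication by alpha.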
Journal
Journal title: Computer systems science and engineering
Year: 2023
ISSN: 0267-6192
DOI: https://doi.org/10.32604/csse.2023.034710